Web Survey Bibliography

Title: Interactive carrots and sticks to improve data quality
Year: 2011
Access date: 30.11.2012
Abstract

Interactive web questionnaires promise to improve survey measurement relative to more static modes, whether online or paper. By designing questionnaires that react to respondent actions associated with reduced data quality, it may be possible to promote behavior that leads to improved quality. We are investigating this type of interactivity by giving feedback to respondents (“speeders”) when they answer so quickly that they cannot realistically have read the question, let alone thought about the answer (“You seem to have responded very quickly. Please be sure you have given the question sufficient thought to provide an accurate answer.”).

In prior research (reported at AAPOR, 2009) we observed that, overall, speeders answered questions about quantities (e.g., “Overall, how many overnight trips have you taken in the PAST 2 YEARS?”) more slowly when they were prompted than when they were not. We assume the slowdown reflects improved data quality – they were also less likely to straightline on later grid items – but we cannot be sure without a direct measure of response accuracy.

In the current research we explored the relationship between response time and quality by prompting speeders on simple numeracy items for which we could determine response accuracy (e.g., “If the chance of getting a disease is 10%, how many people out of 100 would be expected to get the disease: 1, 10 or 20?”). Half of the respondents were prompted whenever they answered faster than 300 msec per word on each of seven numeracy items; the other respondents were never prompted. Because the prompt is relatively punitive in tone – in effect, chastising respondents for speeding – we also tried to motivate respondents at the outset to be conscientious when answering. Half of the respondents were asked to commit to reading each question carefully and thinking about the answer before submitting it; the other respondents were not asked to commit. Crossing prompting (yes or no) with commitment (yes or no) produced four experimental conditions, to which 2,565 respondents were randomly assigned.

As in the earlier studies, respondents in the prompt group answered more slowly overall than those in the no-prompt group. The prompt also increased response accuracy for a subset of respondents with moderate levels of education (Some College/Associate’s Degree): these respondents were 5 percentage points more accurate (55% vs. 50%) when they were prompted than when they were not. Those with more education were able to speed while responding accurately, and those with the least education were inaccurate despite answering more slowly. The prompts reduced straightlining in later grid questions across education levels, suggesting the intervention was taken to heart even when it did not affect the accuracy of answers to the numeracy questions. And as in the earlier studies, the intervention did not increase breakoffs.

Commitment had a complementary effect. It slowed responses as much as the prompt did (there were main effects of both prompting and commitment on response time but no interaction) and increased accuracy for respondents with the highest levels of education: those with a Bachelor’s Degree increased from 60% to 65%, and those with a Master’s or more increased from 64% to 71%, when they committed to careful responding. Like the prompting, commitment did not increase breakoffs. So it may be that both approaches – carrots and sticks – used together can promote more thoughtful and, ultimately, more accurate answers across the pool of respondents.
While reliable, the accuracy advantage conferred by prompting and commitment is modest (5 to 6 percentage points). Nonetheless, given concerns about data quality in web surveys – especially when nonprobability panels are used – gains of this size are a welcome improvement, and they may be larger for other types of items for which aptitude is less central. In future research we will explore interventions for behaviors besides speeding, e.g., primacy effects, straightlining, and conditioning. The fact that prompting for speeding did not increase breakoffs suggests respondents may tolerate these other kinds of intervention as well.
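As a concrete illustration of the speeder-detection rule described in the abstract, the short Python sketch below applies the 300 msec-per-word threshold and the prompt wording quoted above. The function names, timing plumbing, and example call are illustrative assumptions, not the study's actual instrument.

# Minimal sketch of the speeder-detection rule described in the abstract.
# The 300 msec-per-word threshold and the prompt wording come from the
# abstract; all names and the surrounding plumbing are illustrative.

PROMPT_TEXT = (
    "You seem to have responded very quickly. Please be sure you have "
    "given the question sufficient thought to provide an accurate answer."
)

MS_PER_WORD_THRESHOLD = 300  # speeding threshold reported in the study


def is_speeding(question_text: str, response_time_ms: float) -> bool:
    """Return True if the answer came faster than 300 msec per word."""
    word_count = len(question_text.split())
    return response_time_ms < word_count * MS_PER_WORD_THRESHOLD


def maybe_prompt(question_text: str, response_time_ms: float) -> str | None:
    """Return the prompt for speeders, or None if the pacing is acceptable."""
    if is_speeding(question_text, response_time_ms):
        return PROMPT_TEXT
    return None


# Example: the 26-word numeracy item quoted in the abstract has a threshold
# of 26 * 300 = 7800 msec, so a 4-second answer triggers the prompt.
question = ("If the chance of getting a disease is 10%, how many people out "
            "of 100 would be expected to get the disease: 1, 10 or 20?")
print(maybe_prompt(question, 4000.0))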

Access/Direct link

Workshop Homepage (abstract) / (presentation)

Year of publication: 2011
Bibliographic type: Conferences, workshops, tutorials, presentations
Full text availability: Further details

